knowledge worker


Why Can't A.I. Manage My E-Mails?

The New Yorker

Chatbots can pass the Turing test--but they can't yet handle an office worker's inbox. One morning last month, I decided to try artificial intelligence on a dire problem: my inbox. In the past twenty years, the e-mail address I use for writing projects has been discovered by a staggering number of P.R. firms, scammers, and strangers with eccentric requests. On this particular day, I had eight hundred and twenty-nine messages. Of the fifty most recent e-mails, the majority were dreck, but about eight were of actual interest, suggesting a hit rate of sixteen per cent--just enough that I had to worry about missing something important.


Towards Human-Centered RegTech: Unpacking Professionals' Strategies and Needs for Using LLMs Safely

Hu, Siying, Yao, Yaxing, Lu, Zhicong

arXiv.org Artificial Intelligence

Large Language Models are profoundly changing work patterns in high-risk professional domains, yet their application also introduces severe and underexplored compliance risks. To investigate this issue, we conducted semi-structured interviews with 24 highly-skilled knowledge workers from industries such as law, healthcare, and finance. The study found that these experts are commonly concerned about sensitive information leakage, intellectual property infringement, and uncertainty regarding the quality of model outputs. In response, they spontaneously adopt various mitigation strategies, such as actively distorting input data and limiting the details in their prompts. However, the effectiveness of these spontaneous efforts is limited due to a lack of specific compliance guidance and training for Large Language Models. Our research reveals a significant gap between current NLP tools and the actual compliance needs of experts. This paper positions these valuable empirical findings as foundational work for building the next generation of Human-Centered, Compliance-Driven Natural Language Processing for Regulatory Technology (RegTech), providing a critical human-centered perspective and design requirements for engineering NLP systems that can proactively support expert compliance workflows.


Closer to Language than Steam: AI as the Cognitive Engine of a New Productivity Revolution

Fang, Xinmin, Tao, Lingfeng, Li, Zhengxiong

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is reframed as a cognitive engine driving a novel productivity revolution distinct from the Industrial Revolution's physical thrust. This paper develops a theoretical framing of AI as a cognitive revolution akin to written language - a transformative augmentation of human intellect rather than another mechanized tool. We compare AI's emergence to historical leaps in information technology to show how it amplifies knowledge work. Examples from various domains demonstrate AI's impact as a driver of productivity in cognitive tasks. We adopt a multidisciplinary perspective combining computer science advances with economic insights and sociological perspectives on how AI reshapes work and society. Through conceptual frameworks, we visualize the shift from manual to cognitive productivity. Our central argument is that AI functions as an engine of cognition - comparable to how human language revolutionized knowledge - heralding a new productivity paradigm. We discuss how this revolution demands rethinking of skills, organizations, and policies. This paper, balancing academic rigor with clarity, concludes that AI's promise lies in complementing human cognitive abilities, marking a new chapter in productivity evolution.


Generative AI Uses and Risks for Knowledge Workers in a Science Organization

Wagman, Kelly B., Dearing, Matthew T., Chetty, Marshini

arXiv.org Artificial Intelligence

Generative AI could enhance scientific discovery by supporting knowledge workers in science organizations. However, the real-world applications and perceived concerns of generative AI use in these organizations are uncertain. In this paper, we report on a collaborative study with a US national laboratory, with employees spanning Science and Operations, about their use of generative AI tools. We surveyed 66 employees, interviewed a subset (N=22), and measured lab-wide early adoption of an internal generative AI interface called Argo. We have four findings: (1) Argo usage data shows small but increasing use by both Science and Operations employees; common current and envisioned use cases for generative AI in this context conceptually fall into either a (2) copilot or a (3) workflow agent modality; and (4) concerns include sensitive data security, academic publishing, and job impacts. Based on our findings, we make recommendations for generative AI use in science and other organizations.


The top 3 ways to use generative AI to empower knowledge workers

MIT Technology Review

When it comes to AI at Adobe, my team has taken a comprehensive approach that includes investment in foundational AI, strategic adoption, an AI ethics framework, legal considerations, security, and content authentication. The rollout follows a phased approach, starting with pilot groups and building communities around AI. This approach includes experimenting with and documenting use cases like writing and editing, data analysis, presentations and employee onboarding, corporate training, employee portals, and improved personalization across HR channels. The rollouts are accompanied by training podcasts and other resources to educate and empower employees to use AI in ways that improve their work and keep them more engaged. While there are innumerable ways that CIOs can leverage generative AI to help surface value at scale for knowledge workers, I'd like to focus on digital documents--a space in which Adobe has been a leader for over 30 years.


ProcessGPT: Transforming Business Process Management with Generative Artificial Intelligence

Beheshti, Amin, Yang, Jian, Sheng, Quan Z., Benatallah, Boualem, Casati, Fabio, Dustdar, Schahram, Nezhad, Hamid Reza Motahari, Zhang, Xuyun, Xue, Shan

arXiv.org Artificial Intelligence

Generative Pre-trained Transformer (GPT) is a state-of-the-art machine learning model capable of generating human-like text through natural language processing (NLP). GPT is trained on massive amounts of text data and uses deep learning techniques to learn patterns and relationships within the data, enabling it to generate coherent and contextually appropriate text. This position paper proposes using GPT technology to generate new process models when needed. We introduce ProcessGPT as a new technology that has the potential to enhance decision-making in data-centric and knowledge-intensive processes. ProcessGPT can be designed by training a generative pre-trained transformer model on a large dataset of business process data. This model can then be fine-tuned on specific process domains and trained to generate process flows and make decisions based on context and user input. The model can be integrated with NLP and machine learning techniques to provide insights and recommendations for process improvement. Furthermore, the model can automate repetitive tasks and improve process efficiency while enabling knowledge workers to communicate analysis findings and supporting evidence, and to make decisions. ProcessGPT can revolutionize business process management (BPM) by offering a powerful tool for process augmentation, automation, and improvement. Finally, we demonstrate how ProcessGPT can augment data engineers in maintaining data ecosystem processes within large banking organizations. Our scenario highlights the potential of this approach to improve efficiency, reduce costs, and enhance the quality of business operations through the automation of data-centric and knowledge-intensive processes. These results underscore the promise of ProcessGPT as a transformative technology for organizations looking to improve their process workflows.


Is Generative AI the New White-Collar Knowledge Worker?

#artificialintelligence

Generative AI is transforming many industries, including entertainment, manufacturing, automotive, and knowledge-based fields. In knowledge-based industries, it has the potential to automate certain tasks, such as generating legal documents and automating financial analysis, which can increase the productivity of knowledge workers. A report by Research and Markets states that generative AI is projected to become a $200.73 billion market by 2032. Recently, Bill Gates said in a blog post, "In the future, ChatGPT will be like having a white-collar worker available to assist you with various tasks." But since generative AI is still in its early stages, it has limitations and unintended consequences. While it can perform tasks, it cannot replace the reasoning abilities and cognitive flexibility of humans that are essential to white-collar knowledge work.


Generative AI And The Future Of Creative Jobs

#artificialintelligence

The sudden popularity of generative AI has revived a popular pre-pandemic preoccupation: How many jobs will AI destroy? Some forecasters predicted a decade ago that almost half of U.S. jobs could be replaced by AI by 2023 (!) or, at the latest, by 2033, mainly impacting low-skill jobs (e.g., no more truck drivers because we would have self-driving trucks). Other crystal-ball observers argued that, in contrast to previous waves of automation, we are entering a new era in which the most affected will be highly skilled knowledge workers. The tight labor market of recent years has somewhat muted these dire predictions. The widespread excitement about generative AI, however, is bringing back the anxiety about jobs, especially the creative kind.


Did ChatGPT Really Pass Graduate-Level Exams?

#artificialintelligence

Way back in 2019--an eon ago in AI time--the New York Times reported an AI milestone: Aristo, a natural-language processing and reasoning system, scored over 90% on parts of the New York Regents 8th Grade Science Exam, and over 83% on parts of the corresponding Grade 12 Science Exam. Aristo, the Times proclaimed, "is ready for high school science." I argued at the time: "The truth is that while these systems perform well on specific language-processing tests, they can only take the test. None come anywhere close to matching humans in reading comprehension or other general abilities that the test was designed to measure." Moreover, such systems lack the basic commonsense understanding of the world that is assumed of humans taking the same tests.


ChatGPT is suddenly everywhere. Are we ready?

Engadget

For a product that its own creators, in a marketing pique, once declared "too dangerous" to release to the general public, OpenAI's ChatGPT is seemingly everywhere these days. The versatile automated text generation (ATG) system, which is capable of outputting copy that is nearly indistinguishable from a human writer's work, is officially still in beta but has already been utilized in dozens of novel applications, some of which extend far beyond the roles ChatGPT was originally intended for -- like that time it simulated an operational Linux shell, or that other time when it passed the entrance exam to Wharton Business School. The hype around ChatGPT is understandably high, with myriad startups looking to license the technology for everything from conversing with historical figures to talking to historical literature, from learning other languages to generating exercise routines and restaurant reviews. But these technical advancements come with a slew of opportunities for misuse and outright harm. And if our previous ham-fisted attempts at handling the spread of deepfake video and audio technologies are any indication, we're dangerously underprepared for the havoc that at-scale, automated disinformation production will wreak upon our society.